I Went to See What's Happened to the Home of the TED Talk. It Was a Little Terrifying.

Slate

Meanwhile, its Audacious Project, a funding initiative that gives mature nonprofits the opportunity to pitch "moonshot" plans to a coalition of philanthropists, has raised over $1 billion in each of the last two years, an epic Robin Hood operation funding a handful of large-scale projects on climate, health, education, and criminal justice. The Audacious recipients here this year are taking a brief break from their work preventing 16 million unsafe abortions, helping governments in 20 countries prevent lead poisoning, or intercepting 5 percent of the world's river-borne plastic before it reaches the ocean.


Canadian officials claim OpenAI violated federal and provincial privacy laws

Engadget

Philippe Dufresne, the Privacy Commissioner of Canada, has found that OpenAI was not compliant with Canadian federal and provincial privacy laws in the training of its AI models. Following an investigation, Dufresne and his counterparts in Alberta, Quebec and British Columbia say OpenAI's approach to data collection and consent violated multiple laws, including Canada's Personal Information Protection and Electronic Documents Act (PIPEDA), which governs how companies collect and use personal information in the normal course of business. The commissioners identified multiple privacy issues with OpenAI's approach, including that the company gathered vast amounts of personal information without adequate safeguards to prevent its use in training models, and that it failed to obtain consent to collect and use that personal information in the first place. Warnings in ChatGPT note that interactions with the AI could be used in training, but third-party data OpenAI has purchased or scraped also includes personal details people are likely not even aware of. According to a summary of the investigation's findings, the commissioners also flagged that ChatGPT users have no way to access, correct or delete that data, along with OpenAI's lackluster attempts to acknowledge the inaccuracy of some of ChatGPT's responses.


Families sue OpenAI, alleging chatbot aided in Canadian school shooting

Al Jazeera

The families of victims of a school shooting in a remote Canadian Rockies town are suing artificial intelligence company OpenAI in a United States federal court, alleging that the ChatGPT maker failed to alert police to the shooter's alarming interactions with the chatbot. A lawsuit filed on Wednesday on behalf of 12-year-old Maya Gebala, who was critically injured in the February shooting, is among the first of more than two dozen cases from families in Tumbler Ridge, British Columbia, in what their lawyers say represents "an entire community stepping forward to hold OpenAI accountable". The cases represent the families of those killed in the school shooting: the five slain children, Zoey Benoit, Abel Mwansa Jr, Ticaria "Tiki" Lampert and Kylie Smith, all 12, and Ezekiel Schofield, 13, as well as education assistant Shannda Aviugana-Durand. Jesse Van Rootselaar, whose interactions with ChatGPT are at the centre of the lawsuits, shot her mother and stepbrother at home before killing an educational assistant and five students aged 12 to 13 at her former school on February 10, according to police.


Victims Allege OpenAI Is Responsible for Mass Shooting

Mother Jones

A new lawsuit underscores key questions about the Tumbler Ridge killer's use of ChatGPT. [Photo: A community vigil in Tumbler Ridge two days after the rural community experienced one of Canada's deadliest shootings. Paige Taylor White/AFP/Getty] Victims of the Tumbler Ridge mass shooting and their families sued OpenAI and its CEO, Sam Altman, in US district court in San Francisco on Wednesday, claiming negligence, product liability, and other violations. The civil complaints are the latest in a wave of litigation against OpenAI alleging that its globally popular chatbot, ChatGPT, helped people commit lethal violence. The complaints were filed by families of multiple victims wounded and killed at Tumbler Ridge Secondary School in British Columbia, Canada, where a suicidal 18-year-old opened fire on February 10.


OpenAI's Sam Altman apologizes for not reporting ChatGPT account of Tumbler Ridge suspect to police

Engadget

Two months after the deadly shooting in Tumbler Ridge, British Columbia, OpenAI's Sam Altman has formally apologized for not informing police of the alarming ChatGPT conversations associated with the suspect's account. Altman penned a letter addressed to the community of Tumbler Ridge. Before the incident, OpenAI had banned the account belonging to the alleged shooter, Jesse Van Rootselaar, for violating its usage policy due to the potential for real-world violence. "I am deeply sorry that we did not alert law enforcement to the account that was banned in June," Altman wrote in the letter. "While I know words can never be enough, I believe an apology is necessary to recognize the harm and irreversible loss your community has suffered."


OpenAI's Sam Altman apologises over failure to report Canadian mass shooter

Al Jazeera

OpenAI CEO Sam Altman has apologised over his company's failure to warn authorities about the concerning online activities of a teen who went on to commit one of Canada's worst mass shootings. Jesse Van Rootselaar, 18, went on a shooting spree in Tumbler Ridge, British Columbia, on February 10, killing eight people. Rootselaar, who was born male but identified as female, died of a self-inflicted gunshot wound. OpenAI said after the attacks that Rootselaar's ChatGPT account had been flagged internally the previous June for misuse "in furtherance of violent activities", resulting in its suspension. The San Francisco-based AI company said at the time that it had not informed authorities, as Rootselaar's usage of the chatbot had not met the threshold of posing a credible or imminent threat of harm to others.


Decentralized Machine Learning with Centralized Performance Guarantees via Gibbs Algorithms

Bermudez, Yaiza, Perlaza, Samir, Esnaola, Iñaki

arXiv.org Machine Learning

In this paper, it is shown, for the first time, that centralized performance is achievable in decentralized learning without sharing the local datasets. Specifically, when clients adopt an empirical risk minimization with relative-entropy regularization (ERM-RER) learning framework and a forward-backward communication between clients is established, it suffices to share the locally obtained Gibbs measures to achieve the same performance as that of a centralized ERM-RER with access to all the datasets. The core idea is that the Gibbs measure produced by client k is used as the reference measure by client k+1. This effectively establishes a principled way to encode prior information through a reference measure. In particular, achieving centralized performance in the decentralized setting requires a specific scaling of the regularization factors with the local sample sizes. Overall, this result opens the door to novel decentralized learning paradigms that shift the collaboration strategy from sharing data to sharing the local inductive bias via the reference measures over the set of models.
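The chaining idea can be illustrated numerically. The following is a minimal sketch, not the authors' code: the finite model set, the squared-loss risks, and the scaling constant are all assumptions chosen for illustration. Over a finite model set, each ERM-RER solution is a Gibbs measure that tilts its reference by the scaled local empirical risk, and passing each client's measure forward as the next client's reference recovers the centralized Gibbs measure without pooling any data:

```python
import numpy as np

rng = np.random.default_rng(0)
models = np.linspace(-2.0, 2.0, 51)             # finite set of candidate models
prior = np.full(models.size, 1.0 / models.size)  # initial reference measure
beta = 3.0                                       # illustrative scaling constant

def gibbs(reference, risks, n):
    """Gibbs measure: the reference tilted by the scaled empirical risk."""
    w = reference * np.exp(-beta * n * risks)
    return w / w.sum()

# three clients, each with a private dataset and a squared-loss empirical risk
datasets = [rng.normal(0.5, 1.0, size=n) for n in (20, 35, 15)]
risks = [np.mean((d[None, :] - models[:, None]) ** 2, axis=1) for d in datasets]

# decentralized: client k+1 uses client k's Gibbs measure as its reference
q = prior
for d, r in zip(datasets, risks):
    q = gibbs(q, r, len(d))

# centralized: a single Gibbs measure over the pooled empirical risk
pooled = np.concatenate(datasets)
pooled_risk = np.mean((pooled[None, :] - models[:, None]) ** 2, axis=1)
q_central = gibbs(prior, pooled_risk, len(pooled))

assert np.allclose(q, q_central)  # identical measures, no data shared
```

The `beta * n` exponent mimics the sample-size-dependent scaling of the regularization factors that the abstract describes; with that scaling, the product of the per-client tilts equals the centralized tilt exactly.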


Discrete Tilt Matching

Chen, Yuyuan, Wang, Shiyi, Potaptchik, Peter, Kim, Jaeyeon, Albergo, Michael S.

arXiv.org Machine Learning

Masked diffusion large language models (dLLMs) are a promising alternative to autoregressive generation. While reinforcement learning (RL) methods have recently been adapted to dLLM fine-tuning, their objectives typically depend on sequence-level marginal likelihoods, which are intractable for masked diffusion models. To address this, we derive Discrete Tilt Matching (DTM), a likelihood-free method that recasts dLLM fine-tuning as state-level matching of local unmasking posteriors under reward tilting. DTM takes the form of a weighted cross-entropy objective with explicit minimizer, and admits control variates that improve training stability. On a synthetic maze-planning task, we analyze how DTM's annealing schedule and control variates affect training stability and prevent mode collapse. At scale, fine-tuning LLaDA-8B-Instruct with DTM yields strong gains on Sudoku and Countdown while remaining competitive on MATH500 and GSM8K.
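A toy sketch of the reward-tilting idea may help; the setup below is an illustrative assumption, not the paper's code. At a single masked position, the tilted target reweights the base model's unmasking posterior by exp(reward/tau), and a cross-entropy weighted by that target is minimized exactly at the tilted distribution, so no sequence-level likelihood is needed:

```python
import numpy as np

def tilt(posterior, rewards, tau=1.0):
    """Reward-tilted target: base posterior reweighted by exp(reward / tau)."""
    w = posterior * np.exp(rewards / tau)
    return w / w.sum()

def weighted_cross_entropy(q_logits, target):
    """Cross-entropy of the model's distribution, weighted by the target."""
    logq = q_logits - np.log(np.exp(q_logits).sum())
    return -(target * logq).sum()

base = np.array([0.5, 0.3, 0.2])      # base unmasking posterior over 3 tokens
rewards = np.array([0.0, 1.0, -1.0])  # illustrative token-level rewards
target = tilt(base, rewards)

# the objective has an explicit minimizer: logits matching the tilted target
opt = weighted_cross_entropy(np.log(target), target)
other = weighted_cross_entropy(np.log(base), target)
assert opt < other  # minimized at the tilted posterior, not at the base model
```

The explicit-minimizer property is what makes the objective likelihood-free: the fine-tuned model only has to match local unmasking posteriors, never evaluate an intractable sequence-level marginal.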


Local Linearity of LLMs Enables Activation Steering via Model-Based Linear Optimal Control

Skifstad, Julian, Yang, Xinyue Annie, Chou, Glen

arXiv.org Machine Learning

Inference-time LLM alignment methods, particularly activation steering, offer an alternative to fine-tuning by directly modifying activations during generation. Existing methods, however, often rely on non-anticipative interventions that ignore how perturbations propagate through transformer layers and lack online error feedback, resulting in suboptimal, open-loop control. To address this, we show empirically that, despite the nonlinear structure of transformer blocks, layer-wise dynamics across multiple LLM architectures and scales are well-approximated by locally-linear models. Exploiting this property, we model LLM inference as a linear time-varying dynamical system and adapt the classical linear quadratic regulator to compute feedback controllers using layer-wise Jacobians, steering activations toward desired semantic setpoints in closed-loop with minimal computational overhead and no offline training. We also derive theoretical bounds on setpoint tracking error, enabling formal guarantees on steering performance. Using a novel adaptive semantic feature setpoint signal, our method yields robust, fine-grained behavior control across models, scales, and tasks, including state-of-the-art modulation of toxicity, truthfulness, refusal, and arbitrary concepts, surpassing baseline steering methods. Our code is available at: https://github.com/trustworthyrobotics/lqr-activation-steering
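The control formulation can be sketched in a few lines. Assuming, for illustration only, stand-in random matrices in place of real layer-wise Jacobians, a finite-horizon LQR solved by backward Riccati recursion yields time-varying feedback gains that drive the deviation of the activation from a semantic setpoint toward zero:

```python
import numpy as np

def lqr_gains(As, B, Q, R, Qf):
    """Backward Riccati recursion for time-varying A_t; returns gains K_t."""
    P = Qf
    Ks = []
    for A in reversed(As):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        Ks.append(K)
    return Ks[::-1]

rng = np.random.default_rng(0)
d = 4
# e_{t+1} = A_t e_t + B u_t on the deviation e_t = x_t - setpoint;
# the A_t below are stand-ins for layer-wise Jacobians
As = [np.eye(d) + 0.1 * rng.normal(size=(d, d)) for _ in range(6)]
B = np.eye(d)                  # steering input enters every coordinate
Q = np.zeros((d, d))           # no running state cost
R = 0.1 * np.eye(d)            # small steering-effort penalty
Qf = 100.0 * np.eye(d)         # heavy terminal setpoint-tracking cost

Ks = lqr_gains(As, B, Q, R, Qf)
e = rng.normal(size=d)         # initial deviation from the setpoint
e0_norm = np.linalg.norm(e)
for A, K in zip(As, Ks):
    e = (A - B @ K) @ e        # closed-loop update with feedback u_t = -K_t e_t
assert np.linalg.norm(e) < 0.1 * e0_norm  # deviation shrinks toward zero
```

In the actual method the A_t would come from Jacobians of the transformer's layer maps and the setpoint from a semantic feature direction; the sketch only shows the Riccati machinery and the closed-loop tracking it buys.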


OpenAI faces criminal probe over role of ChatGPT in shooting

BBC News

OpenAI is facing a criminal investigation in the US over whether its ChatGPT technology played a part in the murder of two people during a mass shooting at Florida State University last year. Florida's Attorney General James Uthmeier said on Tuesday his office had been looking into the use of the artificial intelligence (AI) chatbot by a man who allegedly shot several people at the campus in Tallahassee. "Our review has revealed that a criminal investigation is necessary," Uthmeier said. "ChatGPT offered significant advice to this shooter before he committed such heinous crimes." An OpenAI spokesperson said: "ChatGPT is not responsible for this terrible crime."